callerID-SNNs
Howell: callerID Spiking Neural Networks (callerID-SNNs)
Table of Contents
Introduction
13Nov2023 This DRAFT webPage is still [incomplete, erroneous, redundant], but will evolve with time. However, my priority focus is on my [computer programs, testing, initial modelling], so updates of this webPage will be delayed!
In a nutshell - callerID-SNNs assume that the identity of a "source" neuron's spike can be readily identified by a "focus" neuron receiving it, and thereby be distinguished from the spiking of all other neurons connected to the "focus" neuron, and from noise. The "focus" neuron therefore has a relatively "clean" time-series of spiking information from all of its connected neurons that can be used for processing. While my earlier work focussed mostly on extra-neuron networks to do the processing, my priority focus is to look for genetic [code, mechanism]s that can [describe, explain, extend] neural networks.
If the "callerID-SNN" concept ever becomes applicable, it may provide a means to :
- dramatically reduce noise seen by a "focus" neuron
- dramatically reduce, and sometimes avoid, [biologically-unrealistic, huge, iterative] computations, such as for some (limited) aspects of [learning, STDP]
- If a dramatically simpler "a spike is a spike" concept is used that ignores spike [amplitude, waveform] and just uses spike [identification, timing], then computational costs are again reduced dramatically. However, as with other SNN models, this will require a means of handling [numbers, arithmetic, calculus, learning (eg gradient-based methods), evolution, etc].
- permit the direct application of [Turing, von Neumann]-like programming code for Spiking Neural Networks (SNNs). By itself, this would be a huge advantage that can still use the inherent advantages of conventional ANNs.
- tie SNNs to genetic [coding, mechanisms, processing] within each neuron. Perhaps this could :
- evolve towards useful tools for [[bio, psycho]logists, neuro-scientists, etc]
- help identify program coding, as distinct from, or hybridized with, protein coding within [DNA, mRNA]. While this is mostly an issue for my MindCode project, callerID-SNNs fit nicely into, and may pragmatically help, that context.
- shine a light on the ontogeny of [neuron, network, brain]s in terms of [DNA, architecture, function, process, model, plan, optimize, ...]. "Tuning", as an example, may help explain processes such as retinal retinotopic mapping?
- greatly simplify evolutionary [mechanism, process]s for [individual, group]s of neurons
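The core callerID assumption above can be sketched in a few lines of Python (a toy illustration only; the signature format and neuron names are my hypothetical assumptions, not biology):

```python
# Toy sketch of the callerID assumption: each "source" neuron's spike
# arrives at the "focus" neuron as a fixed, unique "blurb" signature,
# so the caller is identified by simple lookup, and unmatched
# signatures can be treated as noise.

SIGNATURES = {
    "sourceA": (0, 2, 3),   # blurb time offsets within the focus neuron
    "sourceB": (0, 1, 4),
    "sourceC": (1, 2, 5),
}
LOOKUP = {sig: name for name, sig in SIGNATURES.items()}

def identify(blurb):
    """Return the callerID, or None if the blurb matches no known source."""
    return LOOKUP.get(tuple(blurb))

print(identify((0, 1, 4)))   # sourceB - a clean, identified spike
print(identify((0, 5, 5)))   # None - rejected as noise
```

This is the sense in which the "focus" neuron sees a relatively "clean" time-series : anything that fails the lookup is simply discarded.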
Detailed [description, specification]
The "callerID Spiking Neural Network" (callerID-SNN, or CID-SNN) is a "what-if" speculation that :
- A spike received by a "focus" neuron from a connected "source" neuron produces a time-series of post-synaptic "blurbs" (for want of a better term), arising from each synapse connecting the "source" and "focus" neurons.
- post-synaptic "blurb" time series are assumed to occur at a time-scale that is at least an order of magnitude smaller than spiking time series. This is a BIG assumption : that intra-cellular gene processing can occur that fast. It seems to be contradicted by at least some of the data available, but I will look into this more.
- A "source" neuron's "blurb" time series remains the same "forever" for a given "focus" neuron.
- Post-synaptic "blurbs" may be membrane potential perturbations, like spiking itself, or a combination with other signalling within the "focus" neuron, possibly including :
- chains of transport biochemicals on microtubules
- very fast (on the edge of chaos?) water phase changes between [bulk, gel-like (EZ)] as per Gerald Pollack, U of Washington (maybe retired)
- other : to be addressed at a later date
- Post-synaptic "blurb" time-series arising from all neurons connected to the "focus" neuron are pooled together in my current modelling. As with conventional "Artificial Neural Network" (ANN) modelling, there is not a network of direct processing paths from synapse to cellular mechanisms for processing a "source" neuron's spiking.
- The firing (spiking) of the "focus" neuron occurs according to several "multiple conflicting hypotheses" :
- threshold potential (may be variable) : the conventional calculus of changes in the "focus" neuron's membrane potential leads to a threshold potential at which firing occurs
- extra-neuron [Turing, von Neumann]-like computations based on the local neural network [structure, connection]s. This was a focus of my previous MindCode and earlier work (eg. Genetic specification of recurrent neural networks (draft version of a WCCI2006 conference paper)), but isn't a currently active part of my work, as a priority for me is to search for a [Lamarckian, Mendelian] hereditary basis for neural networks, tied into cellular processes. This has long been a focus of Juyang Weng.
- intra-neuron [Turing, von Neumann]-like computations based on the "focus" neuron's [DNA, RNA, methylation, sequence processing mechanisms]. This is a separate subject addressed by my MindCode 2023 concept.
- other : to be addressed at a later date (see the bullet-list below 'the "callerID-SNN" model ignores')
- data-independent "tuning" (as distinct from [[learn, train]ing, evolution]) of connected neurons is necessary to ensure that post-synaptic "blurb" time series as seen by EACH connected neuron are "sufficiently distinct", so that the callerID can be established and can be [model, act]ed upon.
- random noise can largely be [filter, ignore]ed
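The pooling and noise-filtering points above can be sketched as follows (a toy Python illustration; the binary-bin encoding, signature shapes, and names are my hypothetical assumptions, not part of any published model). Blurbs are binary events in fine time bins (tSyn), all connected neurons' blurb trains are pooled into one stream, and each caller is recovered by sliding its preset signature over the pool:

```python
# Hypothetical sketch: recover which "source" neurons fired, and when,
# from a single pooled "blurb" stream, by matching each neuron's preset
# signature against the pool. A lone random noise blurb completes no
# signature and is therefore ignored.

SIGNATURES = {
    "n1": [1, 0, 1, 1],      # blurbs at fine-bin offsets 0, 2, 3
    "n2": [1, 1, 0, 0, 1],   # blurbs at fine-bin offsets 0, 1, 4
}

def detect(pooled, sig):
    """Start bins where every blurb of the signature is present in the pool."""
    hits = []
    for t in range(len(pooled) - len(sig) + 1):
        if all(pooled[t + i] >= s for i, s in enumerate(sig)):
            hits.append(t)
    return hits

# Build a pooled stream: n1 fires at bin 0, n2 at bin 12, plus one
# lone noise blurb at bin 8.
pooled = [0] * 18
for t0, sig in [(0, SIGNATURES["n1"]), (12, SIGNATURES["n2"])]:
    for i, s in enumerate(sig):
        if s:
            pooled[t0 + i] = 1
pooled[8] = 1

print(detect(pooled, SIGNATURES["n1"]))   # [0]  - n1's spike recovered
print(detect(pooled, SIGNATURES["n2"]))   # [12] - n2's spike recovered
```

With denser traffic this naive matching would produce false positives, which is exactly why the "tuning" requirement above insists that signatures be "sufficiently distinct".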
At present, the callerID-SNN does not qualify as a hypothesis (certainly not as a theory), and my main effort for now is directed toward building software to [test, explore] the concept.
This callerID-SNN framework differs from standard [Artificial, Spiking] Neural Networks, but probably has been used in whole or in part before by others.
what is currently ignored by callerID-SNNs?
For now the "callerID-SNN" model ignores :
- The assumption that each "source" neuron's post-synaptic time series is constant isn't strictly true, but may be "pragmatic" as long as :
- the synapses and dendrites are of fixed geometry, which is NOT the case during [ontogeny, growth, learning, evolution] of the neuron and brain.
- neuro-[transmitter, modulator]s don't change (eg [dopamine, serotonin, noradrenaline, acetocholine, etc, etc])
- the state of the "focus" neuron doesn't change (eg [active, passive, other])
- we ignore [ontogeny, learning, evolution] of the "focus" and its connected neurons
- local field potentials in the vicinity of the "focus" neuron that are strong enough to affect post-synaptic "blurb" time series within the "focus" neuron
- [ontogeny, growth, learning, evolution] of the neuron and brain : not currently included, and yet this is very much of interest for a future date!
"mRNA program code causes neurons to fire?"
This section looks at some issues that arise from "what-if?" speculation (not a [hypothesis, theory] yet) that mRNA "programs" (in [contrast to, combination with] protein-building codons) are one way to cause the firing of neurons. This contrasts with the conventional [theory, modelling] that neuron firing occurs only when neuron membrane potential exceeds a threshold.
- [Turing, von Neumann]-like programming code within each neuron is the main reason for a neuron to fire.
- "Threshold membrane potential" induced firing is assumed to occur, but as one of "multiple conflicting hypotheses". For now, spiking-by-excess-potential is seen partially as a "safety release" or "reset" for a neuron.
- "Spike Time Dependent Plasticity" (STDP) is accounted for by :
- a "synaptic blurb" layer is added to track post-synaptic "blurbs" (an odd name for now, used to avoid unwanted connotations; it stands for when a synapse "releases")
- the "synaptic blurb" layer operates on a timescale that is a fraction of normal neuron spiking
- the "synaptic blurb" time series auto-identify connected neurons, making it somewhat easy to [distinguish, track] the firing of a specific connected neuron. Random synaptic noise is always part of the synSeq that a neuron sees.
- simple "filter-by-synaptic blurb pattern" is a first step at ignoring [noise, unwanted or inhibited] signals
- "tuning" of the synapse [position, timing] is a requirement to ensure that all connected neurons can be differentiated. Techniques from Shannon information theory and later work are presumably available for doing that, but I haven't touched that yet. For now the synaptic patterns for each connected neuron are preset, and are UNIQUE to the receiving neuron (other connected neurons will have different signals).
- timeScales at multiples of the [basic, smallest] time unit that is able to distinguish "synaptic blurbs" (tSyn) are used for further processing
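The "tuning" requirement above (signatures UNIQUE and differentiable) can be sketched as a distinctness check (a hypothetical illustration, not a published method; the minimum Hamming-distance margin over all time shifts is my own stand-in for a proper information-theoretic measure):

```python
# Hypothetical "tuning" check: require a minimum Hamming distance
# between every pair of preset blurb signatures at every relative
# (cyclic) shift, so no caller can be mistaken for a time-shifted
# copy of another.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def min_shift_distance(a, b):
    """Smallest Hamming distance between a and all cyclic shifts of b."""
    return min(hamming(a, b[k:] + b[:k]) for k in range(len(b)))

def sufficiently_distinct(signatures, margin=2):
    names = list(signatures)
    for i, p in enumerate(names):
        for q in names[i + 1:]:
            if min_shift_distance(signatures[p], signatures[q]) < margin:
                return False, (p, q)   # this pair needs re-tuning
    return True, None

sigs = {"n1": [1, 0, 1, 1, 0, 0], "n2": [1, 1, 0, 1, 0, 0]}
print(sufficiently_distinct(sigs))   # (True, None)

bad = {"n1": [1, 0, 1, 1, 0, 0], "n3": [1, 0, 0, 1, 0, 1]}  # n3 = n1 shifted
print(sufficiently_distinct(bad))    # (False, ('n1', 'n3'))
```

A real "tuning" process would presumably adjust synapse [position, timing] until all pairs pass such a check.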
random thoughts
- knowing the callerID of connected neurons, one can potentially use agent-based typed programming, with message-passing for [repeat-to-be-sure, reply confirmation, query, change in [action, priority]]
- "multiple conflicting hypothesis" is an [arbitrary, intended] property of callerID-SNNs???
- "first past the post" selection of actions is the simplest approach, others are intended at some time in the future
- neurotransmitters like [dopamine, serotonin, noradrenaline, acetylcholine, etc]
- inhibition, excitation...
- constantly changing [axon, dendrite, synapse]s in real biology
- adapting to now-dead or erratic neurons, [physical, chemical, biological] damage, etc
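The agent-based message-passing thought above, combined with "first past the post" action selection, can be sketched as follows (all message types, thresholds, and names here are entirely hypothetical illustrations of the idea, nothing more):

```python
# Toy sketch: with callerIDs known, connected neurons can exchange typed
# messages, and the "focus" neuron picks its action by simple
# "first past the post" selection among multiple conflicting hypotheses.

from collections import Counter

def first_past_the_post(messages, threshold):
    """Return the first proposed action whose running count reaches threshold.

    Each message is (caller_id, msg_type, action); only "propose" messages
    are counted, so [query, reply-confirmation, repeat-to-be-sure] traffic
    passes through without forcing an action.
    """
    counts = Counter()
    for caller_id, msg_type, action in messages:
        if msg_type != "propose":
            continue
        counts[action] += 1
        if counts[action] >= threshold:
            return action
    return None  # no hypothesis won; the neuron stays silent

stream = [
    ("n1", "propose", "fire"),
    ("n2", "query", None),
    ("n3", "propose", "inhibit"),
    ("n4", "propose", "fire"),   # "fire" reaches the threshold first
]
print(first_past_the_post(stream, threshold=2))   # fire
```

Other selection schemes (weighted, inhibition-aware, etc) could replace this, as noted above, at some time in the future.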
I have frequently resorted to what I call a "Z80" approach to building this initial system, using a conventional programming language (QNial) rather than designing neuron microcircuits from scratch for everything. Z80 refers to an old microprocessor that I did some machine coding (hex) on a long time ago (I've forgotten everything.)
To me, there is probably nothing [new, unique] in my work on callerID-SNNs; it has all been done before. The problem is to take the time to track down the series of contributors, some of whom are mentioned here and there.
Is there any biological plausibility?
13Nov2023 this is hugely incomplete AND essential for me, but it will take time before I can draft this out more...
Given that the underlying framework of callerID-SNNs departs from "known" neuron [data, principles], ???
Mojtaba Madadi Asl, Alireza Valizadeh, Peter A. Tass 2017 "Dendritic and Axonal Propagation Delays Determine Emergent Structures of Neuronal Networks with Plastic Synapses" Scientific Reports volume 7, Article number: 39682 https://www.nature.com/articles/srep39682
10Sep2020 Recent peer review that I did has great pertinence! :
NEUNET-D-20-00790 p Garcia etal - Small Universal Spiking Neural P Systems with dendritic-axonal delays and dendritic trunk-feedback.pdf
- this supports my concept of in-neuron processing (functions etc)?
Future objectives
tie-in with Grossberg's 2021 'Conscious Mind, Resonant Brain'
A mid-term objective is to tie caller-IDs to the work of Stephen Grossberg as described in my webPage Overview - Stephen Grossberg's 2021 "Conscious Mind, Resonant Brain". Gail Carpenter worked with his concepts from the Spiking Neural Network perspective. Theresa Ludemuir (???), Jose Principe (Reproducing Kernel Hilbert Spaces), and others have also done interesting work with SNNs, but not tied to Grossberg's framework.
My most important objective is to tie this work into my long-term interest in MindCode, linking [Lamarckian, Mendelian] heredity based on [DNA, RNA, mRNA, methylation, etc] to [ontogeny, architecture, function, process, operating system, consciousness, optimization, etc]. Surely I will be dead long before I get there...
fractal [dendrite, axon]s
For now, I can't find my earlier musings (see my very incomplete Fractal notes). As I remember, the plan was to build fractal dendrites (as the main inter-neuron synaptic information transfer) for callerID-SNNs. Axons as well, but perhaps more specialised for power transmission or something.
10Nov2023 Maybe I can use a prime number basis for [time, synapse] fractals, as a contrast to Stephen Puetz's amazing "Universal Wave Series" factor-of-three series, combined with his half series. For example, a roughly factor-of-three series [1, 3, 7, 23, ...], or maybe factor-of-two, or just all primes.
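One way to sketch the prime-number idea in Python (the "nearest prime to each power of three" rule below is my own illustrative guess at such a ladder, not Puetz's series):

```python
# Hypothetical prime-number timescale ladder: for each power of 3, take
# the nearest prime, giving a roughly-tripling series whose members are
# mutually indivisible (no shared factors between timescales).

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def nearest_prime(n):
    """Prime closest to n (ties go to the smaller candidate)."""
    for delta in range(n):
        if is_prime(n - delta):
            return n - delta
        if is_prime(n + delta):
            return n + delta

factor_of_three = [3 ** k for k in range(1, 6)]   # 3, 9, 27, 81, 243
prime_ladder = [nearest_prime(x) for x in factor_of_three]
print(prime_ladder)   # [3, 7, 29, 79, 241]
```

A factor-of-two variant, or simply consecutive primes, could be generated the same way.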
Links to my related work
Related links to some of my work are provided below. All of this is in very early-stage development, even though some of it has been worked on several times since the late 1990s and early 2000s :